Video signal processing method and device
Patent abstract:
An image decoding method according to the present invention may comprise: a step of deriving spatial merge candidates of a current block; a step of generating a merge candidate list for the current block on the basis of the spatial merge candidates; a step of obtaining motion information of the current block on the basis of the merge candidate list; and a step of performing motion compensation on the current block by using the motion information. Here, if the current block does not have a predefined shape or is not equal to or greater than a predefined size, the spatial merge candidates of the current block can be derived based on a block that contains the current block and that has the predefined shape or is equal to or greater than the predefined size. Publication number: ES2703458A2 Application number: ES201990009 Filing date: 2017-08-03 Publication date: 2019-03-08 Inventor: Bae Keun Lee Applicant: KT Corp; IPC main classification:
Patent description:
[0001] [0002] Method and apparatus for processing video signals [0003] [0004] Technical field [0005] [0006] The present invention relates to a method and an apparatus for processing video signals. [0007] [0008] Background art [0009] [0010] Recently, demand for high-resolution, high-quality images such as high-definition (HD) images and ultra-high-definition (UHD) images has increased in various fields of application. However, image data of higher resolution and quality involve increasing amounts of data compared to conventional image data. Therefore, when image data are transmitted over a medium such as conventional wired and wireless broadband networks, or stored on a conventional storage medium, transmission and storage costs increase. To solve these problems, which arise as the resolution and quality of image data increase, high-efficiency image encoding/decoding techniques can be used. [0011] [0012] Image compression technology includes several techniques, among them: an inter prediction technique of predicting a pixel value included in a current picture from a previous or subsequent picture of the current picture; an intra prediction technique of predicting a pixel value included in a current picture using pixel information within the current picture; an entropy coding technique of assigning a short code to a value with a high occurrence frequency and a long code to a value with a low occurrence frequency; and so on. Image data can be effectively compressed using such image compression technology, and can then be transmitted or stored. [0013] [0014] Meanwhile, along with the demand for high-resolution images, demand for stereographic image content, a new image service, has also increased. A video compression technique for effectively providing stereographic image content with high and ultra-high resolution is under discussion. [0015] [0016] Disclosure [0017] Technical problem [0018] [0019] An object of the present invention is to provide a method and an apparatus for efficiently performing inter prediction for an encoding/decoding target block when encoding/decoding a video signal. [0020] [0021] An object of the present invention is to provide a method and an apparatus for deriving a merge candidate based on a block having a predetermined shape or a predetermined size when encoding/decoding a video signal. [0022] [0023] An object of the present invention is to provide a method and an apparatus for performing merging in parallel in a unit of a predetermined shape or a predetermined size when encoding/decoding a video signal. [0024] [0025] The technical objects to be achieved by the present invention are not limited to the technical problems mentioned above, and other technical problems that are not mentioned will be clearly understood by those skilled in the art from the following description. [0026] [0027] Technical solution [0028] [0029] A method and an apparatus for decoding a video signal according to the present invention can derive a spatial merge candidate for a current block, generate a merge candidate list for the current block based on the spatial merge candidate, obtain motion information for the current block based on the merge candidate list, and perform motion compensation for the current block using the motion information.
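The decoding flow summarized in paragraph [0029] can be sketched, purely for illustration, as the following Python pseudocode. The four callables passed into the function are hypothetical names standing in for the detailed procedures described later in this specification; this is a sketch of the claimed steps, not a reference implementation.

```python
# Minimal sketch of the merge-mode decoding flow of paragraph [0029]. The
# callables are hypothetical stand-ins for procedures described later in
# this specification.

def decode_block_with_merge_mode(current_block, merge_index,
                                 derive_spatial_candidates,
                                 derive_temporal_candidate,
                                 build_merge_list,
                                 motion_compensate):
    # Step 1: derive the spatial (and temporal) merge candidates of the block.
    candidates = list(derive_spatial_candidates(current_block))
    temporal = derive_temporal_candidate(current_block)
    if temporal is not None:
        candidates.append(temporal)

    # Step 2: generate the merge candidate list for the current block.
    merge_list = build_merge_list(candidates)

    # Step 3: obtain the motion information selected by the signalled merge index.
    motion_info = merge_list[merge_index]

    # Step 4: perform motion compensation for the current block.
    return motion_compensate(current_block, motion_info)
```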
[0030] [0031] A method and an apparatus for encoding a video signal according to the present invention can derive a spatial merge candidate for a current block, generate a merge candidate list for the current block based on the spatial merge candidate, obtain motion information for the current block based on the merge candidate list, and perform motion compensation for the current block using the motion information. [0032] [0033] In the method and the apparatus for encoding/decoding a video signal according to the present invention, if the current block does not have a predefined shape or does not have a size equal to or greater than a predefined size, the merge candidate of the current block may be derived based on a block having the predefined shape or having a size equal to or greater than the predefined size, the block comprising the current block. [0034] [0035] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the predefined shape can be a square shape. [0036] [0037] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the current block may have the same spatial merge candidate as a neighboring block included, together with the current block, in the block of the square shape. [0038] [0039] In the method and the apparatus for encoding/decoding a video signal according to the present invention, if the current block and the spatial merge candidate are included in the same merge estimation region, it can be determined that the spatial merge candidate is not available. [0040] [0041] In the method and the apparatus for encoding/decoding a video signal according to the present invention, the merge estimation region may have a non-square shape. [0042] [0043] In the method and the apparatus for encoding/decoding a video signal according to the present invention, if the merge estimation region has a non-square shape, the number of candidate shapes that the merge estimation region can take may be restricted to a predefined number. [0044] The features briefly summarized above for the present invention are only illustrative aspects of the detailed description of the invention which follows, and do not limit the scope of the invention. [0045] [0046] Advantageous effects [0047] [0048] According to the present invention, efficient inter prediction can be performed for an encoding/decoding target block. [0049] [0050] According to the present invention, a merge candidate can be derived based on a block having a predetermined shape or a predetermined size. [0051] [0052] According to the present invention, merging can be performed in parallel in a unit of a predetermined shape or a predetermined size. [0053] [0054] According to the present invention, intra prediction for an encoding/decoding target block can be performed by selecting at least one of a plurality of reference lines. [0055] [0056] According to the present invention, a reference line may be derived based on a block having a predetermined shape or having a size equal to or greater than a predetermined size. [0057] [0058] According to the present invention, an intra filter can be applied to at least one of a plurality of reference lines. [0059] [0060] According to the present invention, an intra prediction mode or the number of intra prediction modes can be determined adaptively according to the reference line used for the intra prediction of a current block.
[0061] [0062] The effects obtainable by the present invention are not limited to the effects mentioned above, and other effects not mentioned can be clearly understood by those skilled in the art from the following description. [0063] Description of the drawings [0064] [0065] Fig. 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention. [0066] [0067] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention. [0068] [0069] Fig. 3 is a diagram illustrating an example of a hierarchical partition of an encoding block based on a tree structure according to an embodiment of the present invention. [0070] [0071] Fig. 4 is a diagram illustrating a type of partition in which binary tree-based partition is allowed according to an embodiment of the present invention. [0072] [0073] Fig. 5 is a diagram illustrating an example in which only a binary tree-based partition of a predetermined type according to an embodiment of the present invention is allowed. [0074] [0075] Figure 6 is a diagram for explaining an example in which the information related to the allowed number of binary tree partitions is encoded / decoded, according to an embodiment to which the present invention is applied. [0076] [0077] Figure 7 is a diagram illustrating a partitioning mode applicable to a coding block according to an embodiment of the present invention. [0078] [0079] Figure 8 is a flow chart illustrating an interprediction method according to an embodiment of the present invention. [0080] [0081] Figure 9 is a diagram illustrating a process of deriving movement information from a current block when a blending mode is applied to a current block. [0082] Figure 10 illustrates a process of deriving movement information from a current block when an AMVP mode is applied to the current block. [0083] [0084] Figure 11 is a diagram showing a spatial fusion candidate of a current block. [0085] [0086] Figure 12 is a diagram showing a co-located block of a current block. [0087] [0088] Figure 13 is a diagram for explaining an example of how to obtain a movement vector of a temporary fusion candidate by scaling a movement vector of a co-located block. [0089] [0090] Figure 14 is a diagram showing an example of derivation of a merging candidate from a non-square block based on a square block. [0091] [0092] Fig. 15 is a diagram for explaining an example in which a fusion candidate is derived from a binary tree partitioned block based on a higher node block. [0093] [0094] Figure 16 is a diagram illustrating an example of determining the availability of a spatial fusion candidate according to a melting estimation region. [0095] [0096] Figure 17 is a flow chart illustrating processes for obtaining a residual sample according to an embodiment to which the present invention is applied. [0097] [0098] Mode of the invention [0099] [0100] A variety of modifications can be made to the present invention and there are several embodiments of the present invention, examples of which will now be given with reference to the drawings and will be described in detail. However, the present invention is not limited thereto and it can be interpreted that the examples of embodiments include all modifications, equivalents or substitutes of a technical concept and a technical scope of the present invention. Similar reference numbers refer to the similar element described in the drawings. 
[0101] The terms used in the memory, "first", "second", etc. they can be used to describe several components, but the components should not be considered as limited to the terms. The terms are only used to distinguish a component from another component. For example, the "first" component may be referred to as the "second" component without departing from the scope of the present invention, and the "second" component may similarly be referred to as the "first" component. The term 'and / or' includes a fusion of a plurality of elements or any of a plurality of terms. [0102] [0103] It will be understood that when simply referring to an element as 'connected to' or 'coupled to' another element without being 'directly connected to' or 'directly coupled to' another element in the present description, it may be 'directly connected to' or 'directly coupled to' another element or being connected or coupled to another element, having the other element intervening therebetween. In contrast, it should be understood that when an element is referred to as "directly coupled" or "directly connected" to another element, there are no intervening elements present. [0104] [0105] The terms used in the present specification are simply used to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the expression of the plural, unless it has a clearly different meaning in the context. In the present specification, it should be understood that terms such as "including", "having", etc. they intend to indicate the existence of the characteristics, numbers, stages, actions, elements, parts or combinations thereof described in the report, and do not intend to exclude the possibility that one or more characteristics, numbers, stages, actions, elements, parts or combinations thereof may exist or be added. [0106] [0107] Next, they will describe preferred embodiments of the present invention in detail with reference to the accompanying drawings. In the following, the same constituent elements in the drawings are indicated with the same reference numerals, and a repeated description of the same elements will be omitted. [0108] [0109] Figure 1 is a block diagram illustrating a device for encoding a video according to an embodiment of the present invention. [0110] [0111] Referring to Figure 1, the device 100 for encoding a video may include: an image partition module 110, prediction modules 120 and 125, a transformation module 130, a quantization module 135, a reorganization module 160, an entropy coding module 165, an inverse quantization module 140, a reverse transformation module 145, a filter module 150 and a memory 155. [0112] [0113] The constituent parts shown in Figure 1 are shown independently to represent different characteristic functions in the device for encoding a video. Therefore, it does not mean that each constituent part is constituted in a separate hardware or software constituent unit. In other words, each constituent part includes each of the constituent parts listed for convenience. Therefore, at least two constituent parts of each constituent part can be combined to form a constituent part or a constituent part can be divided into a plurality of constituent parts to perform each function. The embodiment in which each constituent part is combined with the embodiment in which a constituent part is divided is also included in the scope of the present invention, if they do not depart from the essence of the present invention. 
[0114] [0115] In addition, some of the constituents may not be indispensable constituents that perform essential functions of the present invention, but may be selective constituents that improve only the performance thereof. The present invention can be implemented by including only the essential constituent parts to implement the essence of the present invention, except the constituents used to improve performance. The structure that includes only the indispensable constituents, except the selective constituents used to improve only the performance, is also included in the scope of the present invention. [0116] [0117] The image partitioning module 110 can divide an input image into one or more processing units. Here, the processing unit can be a prediction unit (PU), a transformation unit (TU) or a coding unit (CU). The image partition module 110 can divide an image into combinations of multiple coding units, prediction units and transformation units, and can encode an image by selecting a fusion of coding units, prediction units and transformation units with a predetermined criterion (for example, cost-dependent). [0118] [0119] For example, an image can be divided into several coding units. A recursive tree structure, such as a quad tree structure, can be used to divide an image into coding units. A coding unit that is divided into other coding units with an image or a larger coding unit such as a root may be partitioned with secondary nodes corresponding to the number of partitioned coding units. A coding unit that is no longer partitioned by a predetermined limitation serves as a leaf node. That is, when it is assumed that only one square partition is possible for one coding unit, one coding unit can be divided up to a further four coding units. [0120] [0121] Hereinafter, in the embodiment of the present invention, the coding unit may mean a unit that performs coding, or a unit that performs decoding. [0122] [0123] A prediction unit can be one of the partitions divided into a square or rectangular shape that has the same size in a single coding unit, or a prediction unit can be one of the partitioned partitions to have a different shape / size in one single coding unit. [0124] [0125] When a prediction unit submitted to intra-prediction based on a coding unit is generated and the coding unit is not the smallest coding unit, the intra-prediction can be performed without dividing the coding unit into multiple NxN prediction units. [0126] [0127] The prediction modules 120 and 125 may include an interpredication module 120 that performs the interprediction and an intraprediction module 125 that performs the intraprediction. It can be determined whether to perform the interprediction or the intra prediction for the prediction unit, and the detailed information (for example, an intra-prediction mode, a motion vector, a reference image, etc.) according to each prediction method. Here, the processing unit subject to prediction can be different from the processing unit for which the prediction method and detailed content is determined. For example, the prediction method, the prediction mode, etc. they can be determined by the prediction unit, and the transformation unit can perform the prediction. A residual value (residual block) can be entered between the generated prediction block and an original block to the transformation module 130. In addition, the information of the prediction mode, the information of the movement vector, etc. 
used for the prediction can be encoded with the residual value by the entropy coding module 165 and can be transmitted to a device to decode a video. When using a particular coding mode, it is possible to transmit to a device to decode video by encoding the original block as it is without generating the prediction block through the prediction modules 120 and 125. [0128] [0129] The interpredication module 120 can predict the prediction unit based on the information of at least one of a previous image or a back image of the current image, or it can predict the prediction unit based on the information of some regions encoded in the image current, in some cases. The interpredication module 120 may include a reference image interpolation module, a motion prediction module and a motion compensation module. [0130] [0131] The reference image interpolation module may receive reference image information from the memory 155 and may generate pixel information of a whole pixel or less than the entire pixel of the reference image. In the case of light pixels, an 8-lead DCT-based interpolation filter with different filter coefficients can be used to generate pixel information of a whole pixel or less than a whole pixel in a 1/4 pixel unit . In the case of chromatic signals, a 4-lead DCT-based interpolation filter having a different filter coefficient can be used to generate pixel information of a whole pixel or less than one whole pixel in a 1/8 unit pixel [0132] [0133] The motion prediction module can perform the motion prediction based on the reference image interpolated by the reference image interpolation module. As methods to calculate a motion vector, it is they can use several methods, such as a block matching algorithm based on full search (FBMA), a three-stage search (TSS), a new three-stage search algorithm (NTS), etc. The motion vector can have a motion vector value in a unit of 1/2 pixel or 1/4 pixel based on an interpolated pixel. The motion prediction module can predict a current prediction unit by changing the motion prediction method. Several methods can be used as motion prediction methods, such as the omission method, the fusion method, the AMVP (Advanced Motion Vector Prediction) method, the intrablock copy method, etc. [0134] [0135] The intra-prediction module 125 can generate a prediction unit based on reference pixel information adjacent to a current block which is pixel information in the current image. When the neighbor block of the current prediction unit is a block subject to interprediction and, therefore, a reference pixel is a pixel subject to interprediction, the reference pixel included in the block submitted to interprediction can be replaced by information from Reference pixels of a neighboring block subject to intra-prediction. That is, when a reference pixel is not available, at least one reference pixel of available reference pixels can be used in place of non-available reference pixel information. [0136] [0137] Intraprediction prediction modes may include a directional prediction mode that uses reference pixel information depending on the direction of the prediction and a non-directional prediction mode that does not use directional information to make the prediction. A mode for predicting the luma information may be different from a mode for predicting the chroma information, and for predicting the chroma information one may use the intra-prediction information used to predict the luma information or the signal information of the luma. Luma predicted. 
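For illustration only, the contrast between a non-directional mode and a directional mode mentioned above can be sketched as follows, assuming 'top' and 'left' are the already reconstructed reference pixels above and to the left of the current block; the codec's exact reference-sample handling is not reproduced here.

```python
import numpy as np

# Minimal sketch, not the codec's exact reference-sample handling. 'top' and
# 'left' are the reconstructed reference pixels above and to the left of the
# current block.

def intra_dc_prediction(top, left, block_size):
    # Non-directional DC mode: every predicted pixel is the mean of the
    # available reference pixels.
    refs = np.concatenate([np.asarray(top, dtype=float)[:block_size],
                           np.asarray(left, dtype=float)[:block_size]])
    return np.full((block_size, block_size), refs.mean())

def intra_horizontal_prediction(left, block_size):
    # A simple directional mode: each row of the block repeats the reference
    # pixel directly to its left.
    left = np.asarray(left, dtype=float)[:block_size]
    return np.tile(left.reshape(-1, 1), (1, block_size))
```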
[0138] [0139] When performing the intra-prediction, when the size of the prediction unit is the same as the size of the transformation unit, the prediction can be made in the prediction unit based on the pixels on the left, top left and the top of the prediction unit. However, when performing the intra-prediction, when the size of the prediction unit is different of the size of the transformation unit, the intra-prediction can be done using a reference pixel based on the transformation unit. In addition, intra-prediction using an NxN partition can be used only for the smallest coding unit. [0140] [0141] In the intra-prediction method, a prediction block can be generated after applying an AIS filter (intra-adaptive smoothing) to a reference pixel depending on the prediction modes. The type of AIS filter applied to the reference pixel may vary. To perform the intra-prediction method, an intra-prediction mode of the current prediction unit can be predicted from the intra-prediction mode of the prediction unit adjacent to the current prediction unit. In predicting the prediction mode of the current prediction unit by using predicted mode information from the neighboring prediction unit, when the intra prediction mode of the current prediction unit is the same as the intra prediction mode of the unit of neighboring prediction, the information indicates that the prediction modes of the current prediction unit and the neighboring prediction unit are equal to each other can be transmitted using predetermined signaling information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, the entropy coding can be performed to encode the prediction mode information of the current block. [0142] [0143] In addition, a residual block that includes information about a residual value that is different between the predicted prediction unit and the original block of the prediction unit can be generated based on the prediction units generated by the prediction modules 120 and 125. The residual block generated can be introduced into the transformation module 130. [0144] [0145] The transformation module 130 can transform the residual block that includes the residual value information between the original block and the prediction unit generated by the prediction modules 120 and 125 by using a transformation method, such as discrete transform of cosine (DCT), discrete sine transform (DST), and KLT. The application of DCT, DST or KLT to transform the residual block can be determined according to the information of the intra-prediction mode of the prediction unit used to generate the residual block. [0146] The quantization module 135 can quantize the transformed values in a frequency domain by the transformation module 130. The quantization coefficients may vary according to the block or the importance of an image. The values calculated by the quantization module 135 can be provided to the inverse quantization module 140 and the reorganization module 160. [0147] [0148] The reorganization module 160 can rearrange the coefficients of the quantized residual values. [0149] [0150] The reorganization module 160 can change a coefficient in the form of a two-dimensional block into a coefficient in the form of a one-dimensional vector through a coefficient scanning method. 
For example, the reorganization module 160 may scan from a DC coefficient to a coefficient in a high frequency domain using a zigzag scanning method to change the coefficients to be in the form of one-dimensional vectors. According to the size of the transformation unit and the intra-prediction mode, the exploration in vertical direction where the coefficients are explored in the form of two-dimensional blocks in the direction of the column or the exploration in horizontal direction where the coefficients in the form of blocks are explored Two-dimensional, you can use the direction of the row, instead of the zigzag scan. That is, the scanning method can be determined among the following: zigzag, vertical scan and horizontal scan, according to the size of the transformation unit and the intraprediction mode. [0151] [0152] The entropy coding module 165 can perform the entropy coding based on the values calculated by the reorganization module 160. Entropy coding can use various coding methods, e.g., Golomb exponential coding, variable length coding adapted to context (CAVLC) and binary arithmetic coding adapted to the context (CABAC). [0153] [0154] The entropy coding module 165 may encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, unit information prediction, information of transformation unit, movement vector information, reference frame information, block interpolation information, filtering information, etc. of the reorganization module 160 and the prediction modules 120 and 125. [0155] [0156] The entropy coding module 165 can encode by entropy the input coefficients of the coding unit of the reorganization module 160. [0157] [0158] The inverse quantization module 140 can quantize inversely the values quantized by the quantization module 135 and the inverse transformation module 145 can transform the transformed values inversely by the transformation module 130. The residual value generated by the inverse quantization module 140 and the inverse transformation module 145 can be combined with the prediction unit predicted by a motion estimation module, a motion compensation module and the intra-prediction module of the prediction modules 120 and 125, so that a block can be generated rebuilt. [0159] [0160] The filter module 150 can include at least one of the following: an unblocking filter, a displacement correction unit and an adaptive loop filter (ALF). [0161] [0162] The unblocking filter can eliminate the distortion of the block that occurs due to the boundaries between the blocks in the reconstructed image. To determine if unlocking should be performed, the pixels included in several rows or columns in the block can be a basis for determining if the unlock filter is applied to the current block. When the unblocking filter is applied to the block, a strong filter or a weak filter can be applied, depending on the required unlocking filtering force. In addition, when applying the unblocking filter, filtering in horizontal direction and filtering in vertical direction can be processed in parallel. [0163] [0164] The offset correction module can correct the offset with the original image in a unit of one pixel in the image subject to unlocking. 
To perform offset correction on a particular image, it is possible to use a method to apply the displacement in consideration of the edge information of each pixel or a pixel partition method of an image in the predetermined number of regions, determining a region that is subject to make a displacement, and apply the displacement to the determined region. [0165] [0166] Adaptive loop filtering (ALF) can be performed according to the value obtained by comparing the filtered reconstructed image and the original image. The pixels included in the image can be divided into predetermined groups, a filter can be determined that will be applied to each of the groups and an individual filtering can be performed for each group. Information on whether to apply ALF and a light signal can be transmitted by coding units (CU). The shape and filter coefficient of a filter for ALF can vary depending on each block. In addition, the filter for ALF in the same form (fixed form) can be applied independently of the characteristics of the target block of the application. [0167] [0168] The memory 155 may store the reconstructed block or the calculated image through the filter module 150. The stored reconstructed block or image may be provided to the prediction modules 120 and 125 to perform the interprediction. [0169] [0170] Figure 2 is a block diagram illustrating a device for decoding a video according to an embodiment of the present invention. [0171] [0172] Referring to Figure 2, the device 200 for decoding a video can include: an entropy decoding module 210, a reorganization module 215, a reverse quantization module 220, a reverse transformation module 225, prediction modules 230 and 235, a filter module 240 and a memory 245. [0173] [0174] When a bit stream of video is input from the device to encode a video, the input bitstream can be decoded according to a reverse process of the device to encode a video. [0175] [0176] The entropy decoding module 210 can perform the entropy decoding according to an inverse entropy coding process of the entropy coding module of the device to encode a video. For example, according to the methods performed by the device to encode a video, several methods can be applied, such as Golomb's exponential coding, context-adaptive variable length coding (CAVLC) and binary arithmetic coding adapted to the context (CABAC). ). [0177] The entropy decoding module 210 can decode information about the intra-prediction and interprediction performed by the device to encode a video. [0178] [0179] The reorganization module 215 can perform a rearrangement in the entropy of the bit stream decoded by the entropy decoding module 210 based on the reorganization method used in the device to encode a video. The reorganization module can reconstruct and reorganize the coefficients in the form of one-dimensional vectors to the coefficient in the form of two-dimensional blocks. The reorganization module 215 can receive information related to the coefficient scan performed on the device to encode a video and can perform a reorganization through a method of inverse scanning of the coefficients according to the scan order performed on the device to encode a video. [0180] [0181] The inverse quantization module 220 can perform an inverse quantization based on a quantization parameter received from the device to encode a video and the reorganized coefficients of the block. 
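To make the role of the quantization parameter concrete, the following is a minimal sketch of scalar inverse quantization. It assumes a simplified quantization step that doubles every 6 QP values, a common convention in modern codecs, rather than the exact scaling tables of any particular standard.

```python
import numpy as np

def inverse_quantize(levels, qp):
    """Rescale quantized transform coefficient levels back to coefficient values.

    'levels' are the quantized coefficients parsed from the bitstream and
    reordered by the reorganization module; 'qp' is the quantization parameter
    received from the encoder. The step size used here is a simplified model
    (doubling every 6 QP values), not an exact table from any standard.
    """
    q_step = 2.0 ** ((qp - 4) / 6.0)
    return np.asarray(levels, dtype=np.float64) * q_step
```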
[0182] [0183] The inverse transformation module 225 can perform the inverse transformation, that is, inverse DCT, inverse DST and inverse KLT, which is the inverse process of transformation, ie, DCT, DST and KLT, performed by the transformation module in the result of quantification by the device to encode a video. The inverse transformation can be performed depending on a transfer unit determined by the device to encode a video. The reverse transformation module 225 of the device for decoding a video can selectively perform transformation schemes (eg, DCT, DST and KLT) depending on multiple data, such as the prediction method, the size of the current block, the prediction direction, etc. [0184] [0185] The prediction modules 230 and 235 can generate a prediction block based on information about the prediction block generation received from the entropy decoding module 210 and the previously decoded image or block information received from the memory 245. [0186] As described above, just like the operation of the device to encode a video, when performing the intra-prediction, when the size of the prediction unit is the same as the size of the transformation unit, the intra-prediction can be performed in the unit of prediction based on the pixels located on the left, top left and top of the prediction unit. When performing the intra-prediction, when the size of the prediction unit is different from the size of the transformation unit, the intra-prediction can be performed using a reference pixel based on the transformation unit. In addition, intra-prediction using an NxN partition can be used only for the smallest coding unit. [0187] [0188] The prediction modules 230 and 235 may include a prediction unit determination module, an interpredication module and an intra-prediction module. The determination module of the prediction unit may receive a variety of information, such as information of the prediction unit, information of the prediction mode of an intraprediction method, information on the prediction of movement of an interprediction method, etc. From the entropy decoding module 210, it can divide a current coding unit into prediction units, and can determine whether interprediction or intra prediction is performed in the prediction unit. By using the information required in the interprediction of the current prediction unit received from the device for encoding a video, the interpredication module 230 can perform the interprediction in the current prediction unit according to the information of at least one previous image or a subsequent image of the current image, including the current prediction unit. Alternatively, interprediction can be performed based on information from some prereconstructed regions in the current image, including the current prediction unit. [0189] [0190] To perform the interprediction, it can be determined for the encoding unit which of an omission mode, a merge mode, an AMVP mode and a block copy mode is used as a prediction method of movement of the prediction unit included in the coding unit. [0191] [0192] The intra-prediction module 235 can generate a prediction block based on pixel information in the current image. When the prediction unit is a prediction unit subject to intraprediction, the prediction unit can be performed in based on the intra-prediction mode information of the prediction unit received from the device to encode a video. 
The intra-prediction module 235 may include an intra-adaptive smoothing filter (AIS), a reference pixel interpolation module and a DC filter. The AIS filter performs filtering on the reference pixel of the current block, and the filter application can be determined according to the prediction mode of the current prediction unit. The AIS filtering can be performed on the reference pixel of the current block by using the prediction mode of the prediction unit and the AIS filter information received from the device to encode a video. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied. [0193] [0194] When the prediction mode of the prediction unit is a prediction mode in which the intra-prediction is performed based on the pixel value obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate the reference pixel of a whole pixel or less than an integer pixel. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolation of the reference pixel, the reference pixel can not be interpolated. The DC filter can generate a prediction block through the filtering when the prediction mode of the current block is a CC mode. [0195] [0196] The reconstructed block or image can be provided to the filter module 240. The filter module 240 can include the deblocking filter, the displacement correction module and the ALF. [0197] [0198] Information on whether or not the unlock filter applies to the corresponding block or image, and information on which of the filters, strong or weak, is applied when the unlock filter is applied can be received from the device to encode a video. The unlock filter of the device for decoding a video can receive information about the unlock filter of the device to encode a video, and can perform a unlock filtering in the corresponding block. [0199] [0200] The offset correction module can perform offset correction on the reconstructed image based on the type of offset correction and the offset value information applied to an image at perform the coding. [0201] [0202] The ALF can be applied to the coding unit based on information on whether the ALF should be applied, the ALF coefficient information, etc., received from the device to encode a video. The ALF information can be provided as included in a particular set of parameters. [0203] [0204] The memory 245 can store the reconstructed image or block for use as a reference image or block and can provide the reconstructed image to an output module. [0205] [0206] As described above, in the embodiment of the present invention, for convenience of explanation, the coding unit is used as a term representing a unit for coding, but the coding unit can serve as a unit that performs decoding as well as the coding. [0207] [0208] In addition, a current block may represent an objective block to be encoded / decoded. And, the current block can represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transformation block (or a transformation unit), a block of prediction (or a prediction unit), or the like, depending on a coding / decoding stage. [0209] [0210] An image can be encoded / decoded by dividing into base blocks that have a square shape or a non-square shape. At this time, the base block can be referred to as a coding tree unit. 
The coding tree unit can be defined as a coding unit of the largest size allowed within a sequence or segment. Information on whether the encoding tree unit has a square shape or if it has a non-square shape or information on the size of the encoding tree unit can be signaled through a set of sequence parameters, a set of image parameters or a segment header. The unit of the coding tree can be divided into smaller partitions. At this time, if it is assumed that the depth of a partition generated by dividing the coding tree unit is 1, the depth of a partition generated by dividing the partition that has the depth 1 can be defined as 2. That is, a partition generated by dividing a partition having a depth k in the coding tree unit can be defined as having a depth k + 1. [0211] [0212] An arbitrary size partition generated by dividing an encoding tree unit can be defined as a coding unit. The coding unit can be recursively divided or divided into base units for prediction, quantization, transformation or loop filtering, and the like. For example, a partition of arbitrary size generated by dividing the coding unit can be defined as a coding unit, or it can be defined as a transformation unit or a prediction unit, which is a base unit for prediction, quantification , transformation or filtering in loop and the like. [0213] [0214] The partition of a coding tree unit or a coding unit can be made based on at least one of a vertical line and a horizontal line. In addition, the number of vertical lines or horizontal lines that divide the coding tree unit or the coding unit may be at least one or more. For example, the coding tree unit or coding unit can be divided into two partitions using a vertical line or a horizontal line, or the coding tree unit or coding unit can be divided into three partitions using two lines vertical or two horizontal lines. Alternatively, the coding tree unit or the coding unit can be divided into four partitions having a length and a width of 1/2 using a vertical line and a horizontal line. [0215] [0216] When a coding tree unit or a coding unit is divided into a plurality of partitions using at least one vertical line or at least one horizontal line, the partitions may have a uniform size or a different size. Alternatively, any partition can have a different size than the remaining partitions. [0217] [0218] In the embodiments described below, it is assumed that a coding tree unit or a coding unit is divided into a quad tree structure or a binary tree structure. However, it is also possible to divide a coding tree unit or a coding unit using a greater number of vertical lines or a greater number of horizontal lines. [0219] [0220] Fig. 3 is a diagram illustrating an example of a hierarchical partition of an encoding block based on a tree structure according to an embodiment of the present invention. [0221] [0222] An input video signal is decoded in predetermined block units. Such a predetermined unit for decoding the video input signal is a coding block. The coding block can be a unit that performs prediction, transformation and intra / inter quantification. In addition, a prediction mode (e.g., intra-prediction mode or interprediction mode) is determined in a unit of a coding block, and the prediction blocks included in the coding block may share the determined prediction mode. The coding block can be a square or non-square block having an arbitrary size in a range of 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256 or more. 
[0223] [0224] Specifically, the coding block can be divided hierarchically according to at least one of a quad tree and a binary tree. Here, the quad-tree-based partition can mean that a 2Nx2N coding block is divided into four NxN coding blocks, and the binary-tree-based partition can mean that one coding block is divided into two coding blocks. Even if partition based on a binary tree is performed, there may be a square-shaped coding block at the bottom depth. [0225] [0226] The partition based on binary trees can be performed symmetrically or asymmetrically. The coding block divided according to the binary tree can be a square block or a non-square block, such as a rectangular shape. For example, a partition type in which partition based on a binary tree is allowed may comprise at least one of a symmetric type of 2NxN (non-square horizontal directional coding unit) or Nx2N (vertical square non-square encoding unit) , asymmetric type of nLx2N, nRx2N, 2NxnU or 2NxnD. [0227] [0228] The partition based on binary tree can be allowed in a limited way to a partition of symmetric or asymmetric type. In this case, the construction of the coding tree unit with square blocks may correspond to the four-tree CU partition, and the construction of the coding tree unit with non-square symmetric blocks may correspond to the binary tree division. The construction of the coding tree unit with square blocks and symmetrical non-square blocks may correspond to the CU quadruple and binary tree partition. [0229] [0230] The partition based on a binary tree can be done in a coding block where the quad-tree-based partition is no longer performed. The quad-tree-based partition can no longer be performed in the partitioned coding block based on the binary tree. [0231] [0232] In addition, the partition of a lower depth can be determined according to the partition type of a higher depth. For example, if partition based on a binary tree is allowed in two or more depths, only the same type as the binary tree partition of the upper depth in the lower depth can be allowed. For example, if the partition based on binary tree in the upper depth is done with the type 2NxN, the partition based on binary tree in the lower depth is also done with the type 2NxN. Alternatively, if the binary tree-based partition at the top depth is performed with the Nx2N type, the binary tree-based partition at the bottom depth is also performed with the Nx2N type. [0233] [0234] On the contrary, it is also possible to allow, at a lower depth, only a different type of a binary tree partition of a higher depth. [0235] [0236] It may be possible to limit only a specific type of partition based on binary tree to be used for sequence, segment, coding tree unit or coding unit. As an example, only type 2NxN or partition type Nx2N based on binary tree can be allowed for the encoding tree unit. An available partition type can be predefined in an encoder or a decoder. Or the information about the type of available partition or about the unavailable partition type can be encoded and then sent through a bit stream. [0237] [0238] Figure 5 is a diagram illustrating an example in which only one type is allowed specific partition based on binary tree. Figure 5A shows an example in which only partition type Nx2N based on binary tree is allowed, and Figure 5B shows an example in which only partition type 2NxN based on binary tree is allowed. 
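The sub-block sizes produced by the quad-tree split and by the symmetric and asymmetric binary-tree partition types named above can be summarized with the small helper below. This is an illustrative sketch; the 1/4 : 3/4 ratio assumed for the asymmetric types (nLx2N, nRx2N, 2NxnU, 2NxnD) is a common convention and is not quoted from this description.

```python
# Illustrative sketch: sub-block sizes for a block of width x height samples
# under the split types mentioned above. The 1/4 : 3/4 ratio assumed for the
# asymmetric types is a common convention, not a value quoted from this text.

def split_sizes(width, height, split_type):
    w, h = width, height
    if split_type == "QT_SPLIT":        # quad tree: four blocks of half width and height
        return [(w // 2, h // 2)] * 4
    if split_type == "BT_2NxN":         # symmetric horizontal binary split
        return [(w, h // 2), (w, h // 2)]
    if split_type == "BT_Nx2N":         # symmetric vertical binary split
        return [(w // 2, h), (w // 2, h)]
    if split_type == "BT_nLx2N":        # asymmetric vertical split, narrow left part
        return [(w // 4, h), (3 * w // 4, h)]
    if split_type == "BT_nRx2N":        # asymmetric vertical split, narrow right part
        return [(3 * w // 4, h), (w // 4, h)]
    if split_type == "BT_2NxnU":        # asymmetric horizontal split, narrow top part
        return [(w, h // 4), (w, 3 * h // 4)]
    if split_type == "BT_2NxnD":        # asymmetric horizontal split, narrow bottom part
        return [(w, 3 * h // 4), (w, h // 4)]
    raise ValueError("unknown split type")

# Example: a 64x64 block split with BT_Nx2N yields two 32x64 blocks.
assert split_sizes(64, 64, "BT_Nx2N") == [(32, 64), (32, 64)]
```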
To implement the adaptive partition based on the quad tree or binary tree, you can use the information that indicates the quad-tree based partition, the information about the size / depth of the coding block that the quad-tree-based partition allows, the information which indicates the partition based on binary tree, the information about the size / depth of the coding block that allows the partition based on binary tree, the information about the size / depth of the coding block in which the partition based on binary is not allowed binary tree, the information on whether the partition based on binary tree is done in a vertical direction or a horizontal direction, etc. [0239] [0240] In addition, you can obtain information about the number of times a binary tree partition is allowed, a depth to which the binary tree partition is allowed, or the number of depths to which the binary tree partition is allowed for a coding tree unit or a specific coding unit. The information may be encoded in a unit of a coding tree unit or a coding unit, and may be transmitted to a decoder through a bit stream. [0241] [0242] For example, a syntax 'max_binary_depth_idx_minus1' indicating a maximum depth to which the partition of the binary tree is allowed can be encoded / decoded through a bit stream. In this case, max_binary_depth_idx_minus1 1 can indicate the maximum depth at which the partition of the binary tree is allowed. [0243] [0244] With reference to the example shown in Figure 6, in Figure 6, the splitting of the binary tree has been done for a coding unit having a depth of 2 and a coding unit having a depth of 3. Consequently, at minus one of the data indicating the number of times the binary tree partition has been performed in the coding tree unit (ie 2 times), the information indicating the maximum depth at which the partition was allowed of the binary tree in the unit of the coding tree (that is, the depth 3), or the number of depths in which the binary tree was partitioned in the encoding tree unit (ie, 2 (depth 2 and depth 3)) can be encoded / decoded through a bit stream. [0245] [0246] As another example, at least one of the data on the number of times the partition of the binary tree is allowed, the depth at which the partition of the binary tree is allowed, or the number of depths to which the partition is allowed of the binary tree can be obtained for each sequence or each sector. For example, the information may be encoded in a unit of a sequence, an image or a division unit and transmitted through a bit stream. Accordingly, at least one of the partition numbers of the binary tree in a first segment, the maximum depth at which the partition of the binary tree is allowed in the first segment, or the number of depths in which the partition of the binary tree performed in the first segment can be the difference of a second segment. For example, in the first segment, the binary tree partition can be allowed only for one depth, whereas, in the second segment, the binary tree partition can be allowed for two depths. [0247] [0248] As another example, the number of times the partition of the binary tree is allowed, the depth at which the partition of the binary tree is allowed, or the number of depths to which the partition of the binary tree is allowed can be configured in such a way different according to a time-level identifier (TemporalID) of a segment or an image. 
Here, the temporary level identifier (TemporalID) is used to identify each of a plurality of video layers having a scalability of at least one of view, spatial, temporal or quality. [0249] [0250] As shown in Figure 3, the first coding block 300 with the partition depth (depth of division) of k can be divided into multiple second blocks of coding based on the quad tree. For example, the second coding blocks 310 to 340 can be square blocks that are half the width and half the height of the first coding block, and the partition depth of the second coding block can be increased to k + 1. . [0251] [0252] The second coding block 310 with the partition depth of k + 1 can be divided into multiple third coding blocks with the partition depth of k + 2. The partition of the second coding block 310 can be realized using selectively one of the four trees and the binary tree depending on a partition method. Here, the partition method can be determined based on at least one of the information indicating the partition based on quad trees and the information indicating the partition based on binary trees. [0253] [0254] When the second coding block 310 is divided according to the quadruple tree, the second coding block 310 can be divided into four third coding blocks 310a having half the width and half of the second coding block, and the partition depth of the third coding block 310a can be increased to k + 2. In contrast, when the second coding block 310 is divided as a function of the binary tree, the second coding block 310 can be divided into two third coding blocks. Here, each of the two third coding blocks can be a non-square block having an average width and half the height of the second coding block, and the depth of the partition can be increased to k + 2. The second coding block can be determined as a non-square block of a horizontal or vertical direction that depends on a partition address, and the partition address can be determined according to the information on whether the partition based on a binary tree is made in a vertical direction or a horizontal direction. [0255] [0256] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer partitioned as a function of the quad tree or the binary tree. In this case, the sheet coding block can be used as a prediction block or a transformation block. [0257] [0258] Like the partition of the second coding block 310, the third coding block 310a can be determined as a sheet coding block, or it can be further divided based on the quad tree or the binary tree. [0259] [0260] Meanwhile, the third coding block 310b partitioned based on the binary tree can be further divided into the coding blocks 310b-2 of a vertical direction or the coding blocks 310b-3 of a horizontal direction based on the binary tree, and the Depth of partition of the relevant coding blocks can be increased to k + 3. Alternatively, the third coding block 310b can be determined as a sheet coding block 310b-1 that is no longer partitioned based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transformation block. 
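The hierarchical process of Figure 3, in which every split of a block at depth k produces sub-blocks at depth k + 1 using either a quad-tree or a binary-tree split, can be sketched as a simple recursion. The decide_split_mode callable is a hypothetical stand-in for the split information parsed from the bitstream.

```python
# Illustrative recursion over the Figure 3 partitioning: every split of a
# block at depth k produces sub-blocks at depth k + 1. decide_split_mode()
# is a hypothetical stand-in for the split flags parsed from the bitstream.

def partition(block, depth, decide_split_mode):
    x, y, w, h = block
    mode = decide_split_mode(block, depth)      # "QT", "BT_HOR", "BT_VER" or "LEAF"
    if mode == "QT":                            # four blocks of half width and half height
        children = [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                    (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)]
    elif mode == "BT_VER":                      # two blocks of half width, full height
        children = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif mode == "BT_HOR":                      # two blocks of full width, half height
        children = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:                                       # leaf coding block: no further split
        return [(block, depth)]
    leaves = []
    for child in children:
        leaves += partition(child, depth + 1, decide_split_mode)
    return leaves

# Example: split a 64x64 block once by quad tree, then stop.
# partition((0, 0, 64, 64), 0, lambda blk, d: "QT" if d == 0 else "LEAF")
```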
However, the above partitioning process can be performed in a limited manner depending on at least one of the information about the size / depth of the coding block that permits quad-tree-based partitioning, information about size / depth is allowed of the coding block of that binary tree based on the partition, and the information on the size / depth of the coding block of that partition based on binary tree is not allowed. [0261] [0262] A number of a candidate representing a size of a coding block can be limited to: a predetermined number, or a size of a coding block in a predetermined unit that can have a fixed value. As an example, the size of the coding block in a sequence or in an image can be limited to 256x256, 128x128 or 32x32. The information indicating the size of the coding block in the sequence or in the image can be indicated by a sequence header or an image header. [0263] [0264] As a result of the partition based on a quadruple tree and a binary tree, a coding unit can be represented as a square or rectangular shape of an arbitrary size. [0265] [0266] A coding block is encoded using at least one of the modes of omission, intraprediction, interprediction or omission method. Once a coding block is determined, a prediction block can be determined through the predictive partition of the coding block. The predictive partition of the coding block can be realized by a partition mode (Part_mode) indicating a partition type of the coding block. A size or shape of the prediction block can be determined according to the partition mode of the coding block. For example, a size of a prediction block determined according to the partition mode may be equal to or smaller than a size of a coding block. [0267] [0268] Fig. 7 is a diagram illustrating a partition mode that can be applied to a coding block when the coding block is encoded by interprediction. [0269] [0270] When a coding block is encoded by interprediction, one of the 8 partition modes can be applied to the coding block, as in the example shown in Figure 4. [0271] [0272] When a coding block is encoded by intra-prediction, a partition mode PART_2Nx2N or partition mode PART_NxN can be applied to the coding block. [0273] [0274] PART_NxN can be applied when a coding block has a minimum size. Here, the minimum size of the coding block can be predefined in an encoder and a decoder. Or, the information about the minimum size of the coding block can be signaled through a bit stream. For example, the minimum size of the coding block can be signaled through a segment header, so that the minimum size of the coding block can be defined per segment. [0275] [0276] In general, a prediction block can have a size of 64x64 to 4x4. However, when a coding block is encoded by interprediction, it can be restricted that the prediction block does not have a size of 4x4 to reduce the bandwidth of the memory when the motion compensation is performed. [0277] [0278] Figure 8 is a flow chart illustrating an interprediction method according to an embodiment of the present invention. [0279] [0280] Referring to Figure 8, the movement information of a current block S810 is determined. The movement information of the current block may include at least one of a movement vector related to the current block, a reference image index of the current block, or an interprediction address of the current block. 
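As a small illustration, the motion information components listed above (motion vector, reference picture index and inter prediction direction) can be grouped into a single structure; the field names below are introduced here for readability only and are not taken from the specification.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical container for the motion information of a block, grouping the
# three elements named above. L0/L1 refer to the two reference picture lists
# used for uni- or bi-directional inter prediction.

@dataclass
class MotionInfo:
    mv_l0: Optional[Tuple[int, int]] = None   # motion vector for list 0 (x, y)
    mv_l1: Optional[Tuple[int, int]] = None   # motion vector for list 1 (x, y)
    ref_idx_l0: int = -1                      # reference picture index, list 0
    ref_idx_l1: int = -1                      # reference picture index, list 1
    inter_dir: int = 0                        # 1: L0 only, 2: L1 only, 3: bi-directional
```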
[0281] [0282] The motion information of the current block can be obtained based on at least one of information signaled through a bitstream or motion information of a neighboring block adjacent to the current block. [0283] Figure 9 is a diagram illustrating a process of deriving motion information of a current block when a merge mode is applied to the current block. [0284] [0285] If the merge mode is applied to the current block, a spatial merge candidate can be derived from a spatial neighboring block of the current block (S910). The spatial neighboring block can mean at least one of the blocks adjacent to the left, the top or a corner (for example, at least one of the top left corner, the top right corner or the bottom left corner) of the current block. [0286] [0287] The motion information of the spatial merge candidate can be set to be the same as the motion information of the spatial neighboring block. [0288] [0289] A temporal merge candidate can be derived from a temporal neighboring block of the current block (S920). The temporal neighboring block can mean a co-located block included in a collocated picture. The collocated picture has a picture order count (POC) different from that of the current picture including the current block. The collocated picture can be determined as a picture having a predefined index in a reference picture list, or can be determined by a flag signaled in the bitstream. The temporal neighboring block can be determined as a block having the same position and size as the current block in the collocated picture, or as a block adjacent to the block having the same position and size as the current block. For example, at least one of a block including the center coordinates of the block having the same position and size as the current block in the collocated picture, or a block adjacent to the bottom right boundary of that block, can be determined as the temporal neighboring block. [0290] [0291] The motion information of the temporal merge candidate can be determined based on the motion information of the temporal neighboring block. For example, a motion vector of the temporal merge candidate can be determined based on a motion vector of the temporal neighboring block. In addition, an inter prediction direction of the temporal merge candidate can be set to be the same as an inter prediction direction of the temporal neighboring block. However, a reference picture index of the temporal merge candidate can have a fixed value. For example, the reference picture index of the temporal merge candidate can be set to '0'. [0292] [0293] With reference to Figures 11 to 16, an example of how to derive merge candidates will be described in more detail. [0294] [0295] Thereafter, a merge candidate list including the spatial merge candidate and the temporal merge candidate can be generated (S930). If the number of merge candidates included in the merge candidate list is less than the maximum number of merge candidates, a combined merge candidate combining two or more merge candidates, or a merge candidate having a zero motion vector (0, 0), can be included in the merge candidate list. [0296] [0297] When the merge candidate list is generated, at least one of the merge candidates included in the merge candidate list can be specified based on a merge candidate index (S940).
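Steps S910 to S930 can be sketched as follows, reusing the MotionInfo container sketched above. The redundancy check and the padding rule are simplified for illustration, and combined merge candidates are omitted here.

```python
# Minimal sketch of merge candidate list construction (steps S910 to S930),
# assuming the MotionInfo container defined above. 'spatial_neighbour_motion'
# holds the motion information of the available spatial neighbouring blocks,
# and 'temporal_neighbour_motion' that of the co-located block (or None).

def build_merge_candidate_list(spatial_neighbour_motion,
                               temporal_neighbour_motion,
                               max_num_candidates):
    candidates = []

    # S910: spatial merge candidates, with a simple redundancy check.
    for info in spatial_neighbour_motion:
        if info not in candidates:
            candidates.append(info)

    # S920: temporal merge candidate; its reference picture index is fixed
    # to 0, as described above.
    if temporal_neighbour_motion is not None:
        candidates.append(MotionInfo(mv_l0=temporal_neighbour_motion.mv_l0,
                                     ref_idx_l0=0,
                                     inter_dir=temporal_neighbour_motion.inter_dir))

    # S930: pad with zero-motion candidates up to the maximum number
    # (combined merge candidates are omitted in this sketch).
    while len(candidates) < max_num_candidates:
        candidates.append(MotionInfo(mv_l0=(0, 0), ref_idx_l0=0, inter_dir=1))

    return candidates[:max_num_candidates]

# S940: the merge candidate index parsed from the bitstream then selects one
# entry of the returned list as the motion information of the current block.
```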
[0298] [0299] The motion information of the current block can be set to be equal to the motion information of the merge candidate specified by the merge candidate index at S950. For example, when the spatial merge candidate is selected by the merge candidate index, the motion information of the current block can be set to be the same as the motion information of the spatial neighboring block. Alternatively, when the temporal merge candidate is selected by the merge candidate index, the motion information of the current block can be set to be the same as the motion information of the temporal neighboring block. [0300] [0301] Figure 10 illustrates a process of deriving motion information of a current block when an AMVP mode is applied to the current block. [0302] [0303] When the AMVP mode is applied to the current block, at least one of an interprediction direction or a reference image index of the current block can be decoded from a bitstream at S1010. That is, when the AMVP mode is applied, at least one of the interprediction direction or the reference image index of the current block can be determined according to information encoded in the bitstream. [0304] [0305] A spatial motion vector candidate can be determined based on a motion vector of a spatial neighboring block of the current block at S1020. The spatial motion vector candidate can include at least one of a first spatial motion vector candidate derived from a top neighboring block of the current block and a second spatial motion vector candidate derived from a left neighboring block of the current block. Here, the top neighboring block may include at least one of the blocks adjacent to the top or to the top-right corner of the current block, and the left neighboring block of the current block may include at least one of the blocks adjacent to the left or to the bottom-left corner of the current block. A block adjacent to the top-left corner of the current block can be treated either as the top neighboring block or as the left neighboring block. [0306] [0307] When the reference images of the current block and of the spatial neighboring block are different, the spatial motion vector candidate can be obtained by scaling the motion vector of the spatial neighboring block. [0308] [0309] A temporal motion vector candidate can be determined based on a motion vector of a temporal neighboring block of the current block at S1030. If the reference images of the current block and of the temporal neighboring block are different, the temporal motion vector candidate can be obtained by scaling the motion vector of the temporal neighboring block. [0310] [0311] A motion vector candidate list including the spatial motion vector candidate and the temporal motion vector candidate can be generated at S1040. [0312] [0313] When the motion vector candidate list is generated, at least one of the motion vector candidates included in the motion vector candidate list can be specified based on information specifying at least one of the motion vector candidate list at S1050. [0314] The motion vector candidate specified by the information is set as a motion vector prediction value of the current block. And, a motion vector of the current block is obtained by adding a motion vector difference value to the motion vector prediction value at S1060. At this time, the motion vector difference value can be parsed from the bitstream.
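A minimal sketch of the AMVP flow of steps S1020 to S1060 follows. It assumes a two-entry predictor list and scaling of a temporal motion vector by the ratio of temporal distances tb/td as described for the merge case; every function name, parameter name and the zero-vector padding rule is an illustrative assumption, not a normative definition.

```python
def scale_mv(mv, tb, td):
    """Scale a neighboring motion vector by the ratio of temporal distances tb/td
    (current image to its reference vs. neighboring image to its reference)."""
    if td == 0:
        return mv
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))


def derive_mv_amvp(spatial_mvs, temporal_mv, mvp_index, mvd, tb, td, max_mvp_candidates=2):
    """S1020-S1060: build the motion vector candidate list, take the predictor
    selected by mvp_index and add the parsed motion vector difference."""
    mvp_list = []
    for mv in spatial_mvs:                       # S1020: spatial candidates (top / left neighbors)
        if mv is not None and mv not in mvp_list:
            mvp_list.append(mv)
    if temporal_mv is not None and len(mvp_list) < max_mvp_candidates:
        mvp_list.append(scale_mv(temporal_mv, tb, td))   # S1030: temporal candidate, scaled
    while len(mvp_list) < max_mvp_candidates:            # pad with zero vectors if needed
        mvp_list.append((0, 0))

    mvp = mvp_list[mvp_index]                    # S1050: motion vector predictor
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])    # S1060: mv = predictor + difference
```

For example, derive_mv_amvp([(4, -2), None], (8, 8), 0, (1, 1), tb=2, td=4) selects the first spatial predictor (4, -2) and returns (5, -1).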
[0315] [0316] When the motion information of the current block is obtained, motion compensation for the current block can be performed based on the obtained motion information at S820. More specifically, the motion compensation for the current block can be performed based on the interprediction direction, the reference image index and the motion vector of the current block. [0317] [0318] The maximum number of merge candidates that can be included in the merge candidate list can be signaled through the bitstream. For example, the information indicating the maximum number of merge candidates can be signaled through a sequence parameter or an image parameter. [0319] [0320] The number of spatial merge candidates and temporal merge candidates that can be included in the merge candidate list can be determined according to the maximum number of merge candidates. Specifically, the number of spatial merge candidates and the number of temporal merge candidates can be adjusted so that the total number of spatial merge candidates and temporal merge candidates does not exceed the maximum number N of merge candidates. For example, when the maximum number of merge candidates is 5, 4 candidates selected from the 5 spatial merge candidates of the current block can be added to the merge candidate list, and 1 candidate selected from the 2 temporal merge candidates of the current block can be added to the merge candidate list. The number of temporal merge candidates can be adjusted according to the number of spatial merge candidates added to the merge candidate list, or the number of spatial merge candidates can be adjusted according to the number of temporal merge candidates added to the merge candidate list. If the number of merge candidates added to the merge candidate list is smaller than 5, a combined merge candidate combining at least two merge candidates can be added to the merge candidate list, or a merge candidate having a motion vector of (0, 0) can be added to the merge candidate list. [0321] [0322] Merge candidates can be added to the merge candidate list in a predefined order. For example, the merge candidate list can be generated in the order of the spatial merge candidate, the temporal merge candidate, the combined merge candidate and the merge candidate having a zero motion vector. It is also possible to define an order of adding the merge candidates different from the order listed above. [0323] [0324] Figure 11 is a diagram showing spatial merge candidates of a current block. The merge candidate of the current block can be derived from a spatial neighboring block of the current block. For example, the spatial merge candidate may include a merge candidate A1 derived from a block adjacent to the left of the current block, a merge candidate B1 derived from a block adjacent to the top of the current block, a merge candidate A0 derived from a block adjacent to the bottom-left of the current block, a merge candidate B0 derived from a block adjacent to the top-right of the current block, or a merge candidate B2 derived from a block adjacent to the top-left of the current block. The spatial merge candidates can be searched in a predetermined order. For example, the search order of the spatial merge candidates may be A1, B1, B0, A0 and B2.
At this time, B2 can be included in the merge candidate list only when there is no block corresponding to A1, B1, B0 or A0, or when a block corresponding to A1, B1, B0 or A0 is not available. For example, if a block corresponding to A1, B1, B0 or A0 is encoded by intra-prediction, the block can be determined to be unavailable. Alternatively, if the number of spatial merge candidates and temporal merge candidates included in the merge candidate list is smaller than or equal to the maximum number of merge candidates, it is possible to add B2 to the merge candidate list in the next position after the temporal merge candidate. [0325] [0326] To obtain the temporal merge candidate, a collocated image (col_pic) can be selected in a reference image list. The collocated image can be an image in the reference image list having the smallest picture order count (POC) difference with the current image, or an image specified by a reference image index. The temporal merge candidate can be derived based on a block collocated with the current block in the collocated image. At this time, the information on the reference image list used to specify the collocated block can be encoded in a unit of a block, a slice header or an image, and can be transmitted through the bitstream. [0327] [0328] Figure 12 is a diagram showing a collocated block of a current block. The collocated block indicates a block corresponding to the position of the current block in the collocated image. For example, the collocated block can be determined as a block H adjacent to the bottom-right of a block having the same coordinates and size as the current block in the collocated image, or as a block C3 including the center position of that block. At this time, block C3 can be determined as the collocated block when the position of block H is not available, when block H is encoded by intra-prediction, or when block H lies outside the LCU in which the current block is included. [0329] [0330] Alternatively, a block adjacent to a corner of the block having the same coordinates and size as the current block in the collocated image can be determined as the collocated block, or a block having a coordinate within that block can be determined as the collocated block. For example, in the example shown in Figure 12, a block TL, BL or C0 can be determined as the collocated block. [0331] [0332] It is also possible to derive a plurality of temporal merge candidates for the current block from a plurality of collocated blocks. [0333] [0334] A motion vector of the temporal merge candidate can be obtained by scaling a motion vector of the collocated block in the collocated image. Figure 13 is a diagram for explaining an example of obtaining a motion vector of a temporal merge candidate by scaling a motion vector of a collocated block. The motion vector of the temporal merge candidate can be obtained by scaling the motion vector of the collocated block using at least one of a temporal distance tb between the current image and the reference image of the current block and a temporal distance td between the collocated image and the reference image of the collocated block. [0335] [0336] A merge candidate can be derived based on a block having a predetermined shape or a block having a size equal to or greater than a predetermined size.
Accordingly, if the current block does not have the predetermined shape, or if the size of the current block is smaller than the predetermined size, the merge candidate of the current block can be derived based on the block of the predetermined shape including the current block, or based on the block of the predetermined size or larger including the current block. For example, a merge candidate for a coding unit of a non-square shape can be derived based on a coding unit of a square shape including the coding unit of the non-square shape. [0337] [0338] Figure 14 is a diagram showing an example of deriving a merge candidate of a non-square block based on a square block. [0339] [0340] The merge candidate for the non-square block can be derived based on the square block including the non-square block. For example, in the example shown in Figure 14, a merge candidate of a non-square coding block 0 and a non-square coding block 1 can be derived based on a square block. Accordingly, the coding block 0 and the coding block 1 can use at least one of the spatial merge candidates A0, A1, A2, A3 and A4 derived based on the square block. [0341] [0342] Although not shown in the figure, it is also possible to derive a temporal merge candidate for a non-square block based on a square block. For example, the coding block 0 and the coding block 1 can use a temporal merge candidate derived from a temporal neighboring block determined based on the square block. [0343] [0344] Alternatively, at least one of the spatial merge candidate and the temporal merge candidate can be derived based on a square block, and the other can be derived based on a non-square block. For example, the coding block 0 and the coding block 1 can use the same spatial merge candidate derived based on the square block, while the coding block 0 and the coding block 1 may use different temporal merge candidates, each of which is derived based on its own position. [0345] [0346] In the example described above, it is explained that the merge candidate is derived based on the square block, but it is also possible to derive the merge candidate based on a non-square block of a predetermined shape. For example, if the current block is a non-square block of shape 2Nxn (where n is 1/2 N), the merge candidate for the current block can be derived based on a non-square block of shape 2NxN, and if the current block is a non-square block of shape nx2N, the merge candidate for the current block can be derived based on a non-square block of shape Nx2N. [0347] [0348] The information indicating the shape of a block, or the size of a block, on which the derivation of a merge candidate is based can be signaled through the bitstream. For example, block shape information indicating a square shape or a non-square shape can be signaled through the bitstream. Alternatively, the encoder/decoder can derive a merge candidate according to a predefined rule, based on a block having a predefined shape or a block having a size equal to or greater than a predefined size. [0349] [0350] In another example, the merge candidate can be derived based on a quad tree division unit. Here, the quad tree division unit can represent a block unit that is divided by a quad tree. For example, if the current block is divided by a binary tree, the merge candidate of the current block can be derived based on an upper node block that is divided by a quad tree.
If there is no upper node divided by the quad tree for the current block, the merge candidate of the current block can be derived based on an LCU including the current block or based on a block of a specific size. [0351] [0352] Fig. 15 is a diagram for explaining an example in which a merge candidate of a binary tree partitioned block is derived based on an upper node block. [0353] [0354] A binary tree partitioned block 0 of a non-square shape and a binary tree partitioned block 1 of a non-square shape can use at least one of the spatial merge candidates A0, A1, A2, A3 and A4 derived based on the upper block of the quad tree unit. Accordingly, block 0 and block 1 can use the same spatial merge candidates. [0355] [0356] Also, a binary tree partitioned block 2 of a non-square shape, a binary tree partitioned block 3 of a non-square shape and a binary tree partitioned block 4 of a non-square shape can use at least one of B0, B1, B2, B3 and B4 derived based on the upper block of the quad tree unit. Therefore, blocks 2, 3 and 4 can use the same spatial merge candidates. [0357] [0358] Although not shown in the figure, a temporal merge candidate for a binary tree partitioned block can also be derived based on the upper block of the quad tree unit. Accordingly, block 0 and block 1 can use the same temporal merge candidate derived from the temporal neighboring block determined based on the quad tree block unit. Block 2, block 3 and block 4 can also use the same temporal merge candidate derived from the temporal neighboring block determined based on the quad tree block unit. [0359] [0360] Alternatively, it is also possible to derive at least one of the spatial merge candidate and the temporal merge candidate based on the binary tree block unit, while the other is derived based on the quad tree block unit. For example, block 0 and block 1 may use the same spatial merge candidate derived based on the quad tree block unit, but may use different temporal merge candidates, each of which is derived based on its own position. [0361] [0362] The information indicating whether a merge candidate should be derived based on a quad tree partitioned unit or based on a binary tree partitioned unit can be signaled through the bitstream. According to the information, it can be determined whether the merge candidate of the binary tree partitioned block is derived based on the upper node block partitioned by the quad tree. Alternatively, the encoder/decoder can derive the merge candidate based on the quad tree partitioned unit or on the binary tree partitioned unit according to a predefined rule. [0363] [0364] As described above, the merge candidate for the current block can be derived in a unit of a block (for example, in a unit of a coding block or a prediction block) or in a predefined unit. At this time, if any of the spatial merge candidates of the current block exists in a predetermined region, it can be determined to be unavailable and can then be excluded from the spatial merge candidates. For example, if a parallel processing region is defined for parallel processing between blocks, a merge candidate included in the same parallel processing region as the current block, among the spatial merge candidates of the current block, can be determined to be unavailable. The parallel processing region may be referred to as a merge estimation region (MER). The blocks in the parallel processing region have the advantage that they can be merged in parallel.
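One plausible way to realize the availability rule above is to compare, by integer division of the top-left coordinates by the MER width and height, the cell containing the current block with the cell containing each candidate block; the Python sketch below is an illustration under that assumption, not the normative derivation, and all names are hypothetical.

```python
def in_same_mer(pos_a, pos_b, mer_width, mer_height):
    """Two top-left positions fall in the same merge estimation region (MER) when
    integer division of their coordinates by the MER size gives the same cell."""
    return (pos_a[0] // mer_width == pos_b[0] // mer_width and
            pos_a[1] // mer_height == pos_b[1] // mer_height)


def filter_spatial_candidates(current_pos, candidate_positions, mer_width, mer_height):
    """Treat a spatial merge candidate as unavailable when its block lies in the
    same MER as the current block, so that blocks in one MER can be merged in parallel."""
    available = []
    for pos in candidate_positions:
        if pos is None or in_same_mer(current_pos, pos, mer_width, mer_height):
            continue  # candidate determined to be unavailable
        available.append(pos)
    return available
```

For instance, with a non-square 32x16 MER, a candidate block at (40, 8) would be unavailable for a current block at (36, 12), since both positions fall in the MER cell (1, 0).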
[0365] [0366] The merge estimation region may have a square shape or a non-square shape. The merge estimation region of the non-square shape can be limited to a predetermined shape. For example, the merge estimation region of the non-square shape can take the shape of 2NxN or Nx2N. [0367] [0368] At least one of information indicating the shape of the merge estimation region or information indicating the size of the merge estimation region can be signaled through the bitstream. For example, the information on the shape or size of the merge estimation region can be signaled through a slice header, an image parameter or a sequence parameter. [0369] [0370] The information indicating the shape of the merge estimation region can be a 1-bit flag. For example, the syntax 'isrectagular_mer_flag', which indicates whether the merge estimation region has a square shape or a non-square shape, can be signaled through the bitstream. If the value of isrectagular_mer_flag is 1, it indicates that the merge estimation region has a non-square shape, and if the value of isrectagular_mer_flag is 0, it indicates that the merge estimation region has a square shape. [0371] If the merge estimation region has a non-square shape, at least one of information related to a width, a height or a ratio between the width and the height can be signaled through the bitstream. Based on this information, the size and/or the shape of the non-square merge estimation region can be derived. [0372] [0373] Figure 16 is a diagram illustrating an example of determining the availability of a spatial merge candidate according to a merge estimation region. [0374] [0375] If the merge estimation region has an Nx2N shape and the merge estimation region has a predetermined size, the spatial merge candidates B0 and B3 included in the same merge estimation region as a block 1 cannot be used as spatial merge candidates for block 1. Accordingly, the spatial merge candidates of block 1 may be composed of at least one of B1, B2 and B4, excluding the merge candidates B0 and B3. [0376] [0377] Similarly, the spatial merge candidate C0 included in the same merge estimation region as a block 3 cannot be used as a spatial merge candidate for block 3. Therefore, the spatial merge candidates of block 3 may be composed of at least one of C1, C2, C3 and C4, excluding the merge candidate C0. [0378] [0379] The above embodiments have been described mainly for the decoding process, but the encoding process can be performed in the same order as described or in reverse order. [0380] [0381] Figure 17 is a flowchart illustrating a process of obtaining a residual sample according to an embodiment to which the present invention is applied. [0382] [0383] First, a residual coefficient of a current block can be obtained at S1710. A decoder can obtain the residual coefficient through a coefficient scanning method. For example, the decoder can perform a coefficient scan using a zig-zag scan, an up-right scan, a vertical scan or a horizontal scan, and can obtain residual coefficients in the form of a two-dimensional block. [0384] [0385] An inverse quantization can be performed on the residual coefficient of the current block at S1720. [0386] [0387] It can be determined whether an inverse transform on the dequantized residual coefficient of the current block should be skipped at S1730.
Specifically, the decoder can determine whether the inverse transform should be skipped in at least one of the horizontal or vertical direction of the current block. When it is determined that the inverse transform is applied in at least one of the horizontal or vertical directions of the current block, a residual sample of the current block can be obtained by inversely transforming the dequantized residual coefficient of the current block. Here, the inverse transform can be performed using at least one of DCT, DST and KLT. [0388] [0389] When the inverse transform is skipped in both the horizontal and vertical directions of the current block, the inverse transform is not performed in either the horizontal or the vertical direction of the current block. In this case, the residual sample of the current block can be obtained by scaling the dequantized residual coefficient with a predetermined value. [0390] [0391] Skipping the inverse transform in the horizontal direction means that the inverse transform is not performed in the horizontal direction, but the inverse transform is performed in the vertical direction. At this time, scaling can be performed in the horizontal direction. [0392] [0393] Skipping the inverse transform in the vertical direction means that the inverse transform is not performed in the vertical direction, but the inverse transform is performed in the horizontal direction. At this time, scaling can be performed in the vertical direction. [0394] [0395] Whether or not an inverse transform skip technique can be used for the current block can be determined depending on the partition type of the current block. For example, if the current block is generated through a binary tree-based partition, the inverse transform skip scheme may be restricted for the current block. Therefore, when the current block is generated through the binary tree-based partition, the residual sample of the current block can be obtained by inverse transform of the current block. In addition, when the current block is generated through the binary tree-based partition, the encoding/decoding of the information indicating whether the inverse transform is skipped (for example, transform_skip_flag) can be omitted. [0396] [0397] Alternatively, when the current block is generated through the binary tree-based partition, it is possible to limit the inverse transform skip scheme to at least one of the horizontal or vertical direction. Here, the direction in which the inverse transform skip scheme is limited can be determined according to information decoded from the bitstream, or can be determined adaptively depending on at least one of the size of the current block, the shape of the current block, or the intra-prediction mode of the current block. [0398] [0399] For example, when the current block is a non-square block having a width greater than its height, the inverse transform skip scheme can be allowed only in the vertical direction and restricted in the horizontal direction. That is, when the current block is 2NxN, the inverse transform is performed in the horizontal direction of the current block, and the inverse transform can be performed selectively in the vertical direction. [0400] [0401] On the other hand, when the current block is a non-square block having a height greater than its width, the inverse transform skip scheme can be allowed only in the horizontal direction and restricted in the vertical direction.
That is, when the current block is Nx2N, the inverse transform is performed in the vertical direction of the current block, and the inverse transform can be performed selectively in the horizontal direction. [0402] [0403] In contrast to the previous example, when the current block is a non-square block having a width greater than its height, the inverse transform skip scheme can be allowed only in the horizontal direction, and when the current block is a non-square block having a height greater than its width, the inverse transform skip scheme can be allowed only in the vertical direction. [0404] Information indicating whether or not to skip the inverse transform in the horizontal direction, or information indicating whether or not to skip the inverse transform in the vertical direction, can be signaled through a bitstream. For example, the information indicating whether or not to skip the inverse transform in the horizontal direction is a 1-bit flag, 'hor_transform_skip_flag', and the information indicating whether or not to skip the inverse transform in the vertical direction is a 1-bit flag, 'ver_transform_skip_flag'. The encoder can encode at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag' according to the shape of the current block. In addition, the decoder can determine whether or not the inverse transform is skipped in the horizontal direction or in the vertical direction using at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag'. [0405] [0406] The inverse transform can be set to be skipped for either direction of the current block based on the partition type of the current block. For example, if the current block is generated through a binary tree-based partition, the inverse transform in the horizontal or vertical direction can be skipped. That is, if the current block is generated by a binary tree-based partition, it can be determined that the inverse transform for the current block is skipped in at least one of the horizontal or vertical directions without encoding/decoding information (for example, transform_skip_flag, hor_transform_skip_flag, ver_transform_skip_flag) indicating whether the inverse transform of the current block is skipped or not. [0407] [0408] Although the embodiments described above have been described on the basis of a series of steps or flowcharts, they do not limit the time-series order of the invention, and the steps may be performed simultaneously or in different orders as necessary. In addition, each of the components (for example, units, modules, etc.) constituting the block diagrams in the embodiments described above may be implemented by a hardware device or by software, and a plurality of components may be combined and implemented by a single hardware device or by software. The embodiments described above may be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures and the like, alone or in combination.
Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, flash memory and the like. The hardware device can be configured to operate as one or more software modules to perform the process according to the present invention, and vice versa. [0409] [0410] Industrial applicability [0411] [0412] The present invention can be applied to electronic devices capable of encoding/decoding a video.
Claims (13) [1] 1. A method for decoding a video, the method comprising: deriving a spatial merge candidate for a current block; generating a merge candidate list for the current block based on the spatial merge candidate; obtaining motion information for the current block based on the merge candidate list; and performing motion compensation for the current block using the motion information, wherein, if the current block does not have a predefined shape or does not have a size equal to or greater than a predefined size, the merge candidate of the current block is derived based on a block having the predefined shape or having a size equal to or greater than the predefined size, the block comprising the current block. [2] 2. The method of claim 1, wherein the predefined shape is a square shape. [3] 3. The method of claim 2, wherein the current block has the same spatial merge candidate as a neighboring block included, together with the current block, in the block of the square shape. [4] 4. The method of claim 1, wherein, if the current block and the spatial merge candidate are included in the same merge estimation region, the spatial merge candidate is determined to be unavailable. [5] 5. The method of claim 4, wherein the merge estimation region has a square shape or a non-square shape. [6] 6. The method of claim 5, wherein, if the merge estimation region has a non-square shape, the number of candidate shapes that the merge estimation region can take is restricted to a predefined number. [7] 7. A method for encoding a video, the method comprising: deriving a spatial merge candidate for a current block; generating a merge candidate list for the current block based on the spatial merge candidate; obtaining motion information for the current block based on the merge candidate list; and performing motion compensation for the current block using the motion information, wherein, if the current block does not have a predefined shape or does not have a size equal to or greater than a predefined size, the merge candidate of the current block is derived based on a block having the predefined shape or having a size equal to or greater than the predefined size, the block comprising the current block. [8] 8. The method of claim 7, wherein the predefined shape is a square shape. [9] 9. The method of claim 8, wherein the current block has the same spatial merge candidate as a neighboring block included, together with the current block, in the block of the square shape. [10] 10. The method of claim 7, wherein, if the current block and the spatial merge candidate are included in the same merge estimation region, the spatial merge candidate is determined to be unavailable. [11] 11. The method of claim 10, wherein the merge estimation region has a square shape or a non-square shape. [12] 12. The method of claim 11, wherein, if the merge estimation region has a non-square shape, the number of candidate shapes that the merge estimation region can take is restricted to a predefined number. [13] 13.
An apparatus for decoding a video, the apparatus comprising: a prediction unit configured to derive a spatial merge candidate for a current block, to generate a merge candidate list for the current block based on the spatial merge candidate, to obtain motion information for the current block based on the merge candidate list, and to perform motion compensation for the current block using the motion information, wherein, if the current block does not have a predefined shape or does not have a size equal to or greater than a predefined size, the merge candidate of the current block is derived based on a block having the predefined shape or having a size equal to or greater than the predefined size, the block comprising the current block.